    Polarization and multiscale structural balance in signed networks

    Polarization is a common feature of social systems. Structural Balance Theory studies polarization of positive in-group and negative out-group ties in terms of semicycles within signed networks. However, enumerating semicycles is computationally expensive, so approximations are often needed to assess balance. Here we introduce the Multiscale Semiwalk Balance (MSB) approach for quantifying the degree of balance (DoB) in (un)directed, (un)weighted signed networks by approximating semicycles with closed semiwalks. MSB allows principled selection of a range of cycle lengths appropriate for assessing DoB and interpretable under the Locality Principle (which posits that patterns in shorter cycles are crucial for balance). This flexibility overcomes several limitations affecting walk-based approximations and enables efficient, interpretable methods for measuring DoB and clustering signed networks. We demonstrate the effectiveness of our approach by applying it to real-world social systems. For instance, our methods capture increasing polarization in the U.S. Congress, which may go undetected with other methods.
    Comment: 29 pages; 7 figures; preprint before peer review
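
    The core trick of replacing expensive semicycle enumeration with closed semiwalks can be illustrated with matrix powers: for a signed adjacency matrix A, tr(A^k) counts signed closed walks of length k and tr(|A|^k) counts all closed walks, so their ratio gives a per-length degree of balance. Below is a minimal Python sketch of that walk-based idea (illustrative only, not the authors' MSB estimator):

    import numpy as np

    def walk_balance_by_length(A, max_len=10):
        """Per-length degree of balance from closed walks.

        For each walk length k, compares the signed closed-walk count
        tr(A^k) against the unsigned count tr(|A|^k): a ratio of 1 means
        balanced at that scale, -1 maximally unbalanced.  Illustrative
        only; not the authors' MSB estimator.
        """
        absA = np.abs(A)
        ratios = {}
        Ak, absAk = A.copy(), absA.copy()
        for k in range(2, max_len + 1):
            Ak = Ak @ A
            absAk = absAk @ absA
            total = np.trace(absAk)
            ratios[k] = np.trace(Ak) / total if total > 0 else np.nan
        return ratios

    # Toy signed triangle with one negative edge: balanced at k=2, not at k=3.
    A = np.array([[0., 1., 1.],
                  [1., 0., -1.],
                  [1., -1., 0.]])
    print(walk_balance_by_length(A, max_len=4))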

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
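
    The count-versus-flux distinction the abstract stresses can be made concrete with standard LC3-II turnover logic: flux is inferred from how much more LC3-II accumulates when lysosomal degradation is blocked (e.g., with a lysosomal inhibitor) than at steady state. A toy Python sketch of that arithmetic, using entirely hypothetical numbers:

    def lc3_flux_index(lc3ii_basal, lc3ii_with_inhibitor):
        """Crude flux estimate from LC3-II band intensities (normalized to
        a loading control) measured with and without a lysosomal inhibitor.

        A large difference implies active lysosomal degradation (high flux);
        similar values despite abundant autophagosomes point to a block in
        trafficking/degradation rather than increased autophagy.
        """
        return lc3ii_with_inhibitor - lc3ii_basal

    # Hypothetical densitometry values (arbitrary units).
    active_flux = lc3_flux_index(1.0, 2.5)   # inhibitor more than doubles LC3-II
    blocked     = lc3_flux_index(2.4, 2.5)   # high LC3-II, no further accumulation
    print(active_flux, blocked)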

    The evolving SARS-CoV-2 epidemic in Africa: Insights from rapidly expanding genomic surveillance

    INTRODUCTION Investment in Africa over the past year with regard to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sequencing has led to a massive increase in the number of sequences, which, to date, exceeds 100,000 sequences generated to track the pandemic on the continent. These sequences have profoundly affected how public health officials in Africa have navigated the COVID-19 pandemic.
    RATIONALE We demonstrate how the first 100,000 SARS-CoV-2 sequences from Africa have helped monitor the epidemic on the continent, how genomic surveillance expanded over the course of the pandemic, and how we adapted our sequencing methods to deal with an evolving virus. Finally, we also examine how viral lineages have spread across the continent in a phylogeographic framework to gain insights into the underlying temporal and spatial transmission dynamics for several variants of concern (VOCs).
    RESULTS Our results indicate that the number of countries in Africa that can sequence the virus within their own borders is growing and that this is coupled with a shorter turnaround time from the time of sampling to sequence submission. Ongoing evolution necessitated the continual updating of primer sets, and, as a result, eight primer sets were designed in tandem with viral evolution and used to ensure effective sequencing of the virus. The pandemic unfolded through multiple waves of infection that were each driven by distinct genetic lineages, with B.1-like ancestral strains associated with the first pandemic wave of infections in 2020. Successive waves on the continent were fueled by different VOCs, with Alpha and Beta cocirculating in distinct spatial patterns during the second wave and Delta and Omicron affecting the whole continent during the third and fourth waves, respectively. Phylogeographic reconstruction points toward distinct differences in viral importation and exportation patterns associated with the Alpha, Beta, Delta, and Omicron variants and subvariants, when considering both Africa versus the rest of the world and viral dissemination within the continent. Our epidemiological and phylogenetic inferences therefore underscore the heterogeneous nature of the pandemic on the continent and highlight key insights and challenges, for instance, recognizing the limitations of low testing proportions. We also highlight the early warning capacity that genomic surveillance in Africa has had for the rest of the world with the detection of new lineages and variants, the most recent being the characterization of various Omicron subvariants.
    CONCLUSION Sustained investment for diagnostics and genomic surveillance in Africa is needed as the virus continues to evolve. This is important not only to help combat SARS-CoV-2 on the continent but also because it can be used as a platform to help address the many emerging and reemerging infectious disease threats in Africa. In particular, capacity building for local sequencing within countries or within the continent should be prioritized because this is generally associated with shorter turnaround times, providing the most benefit to local public health authorities tasked with pandemic response and mitigation and allowing for the fastest reaction to localized outbreaks. These investments are crucial for pandemic preparedness and response and will serve the health of the continent well into the 21st century.
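
    The turnaround metric highlighted in the results is simply the gap between sampling and submission dates. A hypothetical pandas sketch of how such a per-country summary might be computed from sequence metadata (the column names and values are assumptions, not the study's schema):

    import pandas as pd

    # Hypothetical metadata; real submission records use different schemas.
    meta = pd.DataFrame({
        "country":        ["Kenya", "Kenya", "Senegal"],
        "date_sampled":   ["2021-06-01", "2021-06-10", "2021-06-03"],
        "date_submitted": ["2021-06-20", "2021-06-24", "2021-07-15"],
    })
    for col in ("date_sampled", "date_submitted"):
        meta[col] = pd.to_datetime(meta[col])

    # Median sampling-to-submission turnaround per country, in days.
    meta["turnaround_days"] = (meta["date_submitted"] - meta["date_sampled"]).dt.days
    print(meta.groupby("country")["turnaround_days"].median())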

    Motivational specificity and threat-compensation: The effects of mortality salience on self-control

    Self-control is the mental function that allows people to actively suppress unwanted thoughts, emotions, and urges, as well as re-prioritize their goals in accordance with situational demands. However, self-control requires effort, and exerting effort can lead to mental fatigue—a state termed “ego depletion” (Inzlicht & Schmeichel, 2012). When self-control is depleted, people are less motivated to continue exerting effort, and increasingly motivated to pursue other, more rewarding activities; satisfying these motives can, in turn, restore one’s capacity for self-control. The present research investigates the idea that the salience of certain psychological threats—such as the awareness of one’s mortality—can impose limits on the sorts of behaviors that people will be motivated to engage in after exerting self-control, and, by extension, on what sorts of rewards will be sufficient for restoring mental resources. Across three studies, I draw from research on terror management theory (TMT) and the shifting-priorities model of self-control to explore this hypothesis. I investigate whether specialized threat-compensatory motives for coping with the awareness of one’s mortality can constrain the types of rewards that serve to effectively counteract the ego depletion effect. The results of these studies did not support this hypothesis, but critical methodological issues arose that prevented a proper evaluation of its accuracy. These issues are explored further within the General Discussion. I then conclude with a brief overview of the potential neural mechanisms thought to undergird the regulation of self-control and the processing of motivationally salient reward-based stimuli.

    Modeling Moderators in Psychological Networks

    It is an important goal for psychologists to develop and improve upon methods for describing multivariate relationships among observed variables. Psychological network models represent one class of methods for studying such relationships and are being applied widely throughout psychological science. While these models have a wide range of applications, they are limited by the fact that they currently consider only pairwise relationships among sets of variables. Specifically, they do not account for more complex relational structures, such as those characterized by moderation effects and higher-order interactions. Moderation analysis, which focuses on these types of effects, is a common technique used within psychological research to help reveal the contexts and conditions under which different relationships may emerge or be observed. Thus, the goal of this research is to extend the psychological network framework to include moderator variables, as well as to provide statistical tools and software to facilitate testing such models with psychological data.
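
    One standard way to bring moderators into a network model is node-wise regression with interaction terms: each node is regressed on the remaining nodes plus their products with a candidate moderator, and nonzero interaction weights flag moderated edges. A minimal Python sketch of that idea on simulated data (a generic illustration, not the author's software):

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    n, p = 500, 4
    X = rng.standard_normal((n, p))
    # Planted moderation: the x0-x1 edge strengthens as x2 increases.
    X[:, 0] = 0.3 * X[:, 1] + 0.4 * X[:, 1] * X[:, 2] + rng.standard_normal(n)

    def nodewise_moderated(X, node, moderator):
        """Regress one node on all other nodes plus node-by-moderator
        products; nonzero interaction weights flag moderated edges."""
        others = [j for j in range(X.shape[1]) if j != node]
        partners = [j for j in others if j != moderator]
        design = np.hstack([X[:, others], X[:, partners] * X[:, [moderator]]])
        names = [f"x{j}" for j in others] + [f"x{j}*x{moderator}" for j in partners]
        fit = LassoCV(cv=5).fit(design, X[:, node])
        return dict(zip(names, fit.coef_.round(2)))

    print(nodewise_moderated(X, node=0, moderator=2))  # x1 and x1*x2 should survive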

    Undoing a Rhetorical Metaphor: Testing the Metaphor Extension Strategy

    Political metaphors do more than punch up messages; they can systematically bias observers' attitudes toward the issue at hand. What, then, is an effective strategy for counteracting a metaphor's influence? One could ignore or criticize the metaphor, emphasizing strong counterarguments directly pertaining to the target issue. Yet if observers rely on it to understand a complicated issue, they may be reluctant to abandon it. In this case, a metaphor extension strategy may be effective: Encourage observers to retain the metaphor but reinterpret its meaning by considering other, less obvious implications. The current studies support this claim. Under conditions where participants gained a strong (versus weak) epistemic benefit from a rhetorical metaphor, they were more persuaded by a rebuttal that extended (versus ignored or criticized) that metaphor. The studies use converging operational definitions of epistemic benefit and offer insight into how political attitudes are made and unmade.

    An empirical evaluation of the diagnostic threshold between full-threshold and sub-threshold bulimia nervosa

    Previous research has failed to find differences in eating disorder and general psychopathology and impairment between people with sub- and full-threshold bulimia nervosa (BN). The purpose of the current study was to test the validity of the distinction between sub- and full-threshold BN and to determine the frequency of objective binge episodes and inappropriate compensatory behaviors that would best distinguish between sub- and full-threshold BN. Community-recruited adults (83.5% female) with current sub-threshold (n = 105) or full-threshold BN (n = 99) completed assessments of eating-disorder psychopathology, clinical impairment, internalizing problems, and drug and alcohol misuse. Receiver operating characteristic curve analysis was used to evaluate whether eating-disorder psychopathology, clinical impairment, internalizing problems, and drug and alcohol misuse could empirically discriminate between sub- and full-threshold BN. The frequency of binge episodes and inappropriate compensatory behaviors (AUC = 0.94) was “highly accurate” in discriminating between sub- and full-threshold BN; however, only the frequency of objective binge episodes significantly predicted BN status. Internalizing symptoms (AUC = 0.71) were “moderately accurate” at distinguishing between sub- and full-threshold BN. Neither clinical impairment (AUC = 0.60) nor drug (AUC = 0.56) or alcohol misuse (AUC = 0.52) discriminated between groups. Results suggested that 11 episodes of binge eating and 17 episodes of inappropriate compensatory behaviors optimally distinguished between sub- and full-threshold BN. Overall, results provided mixed support for the distinction between sub- and full-threshold BN. Future research to clarify the most meaningful way to discriminate between sub- and full-threshold BN is warranted to improve the criterion-related validity of the diagnostic system.
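
    The optimal-cutoff step reported here is typically recovered from the ROC curve by maximizing Youden's J (sensitivity + specificity - 1). A generic Python sketch with simulated counts (not the study's data):

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(1)
    # Simulated binge-episode counts: sub-threshold (0) vs full-threshold (1).
    y = np.repeat([0, 1], 100)
    counts = np.concatenate([rng.poisson(6, 100), rng.poisson(16, 100)])

    print(f"AUC = {roc_auc_score(y, counts):.2f}")
    fpr, tpr, thresholds = roc_curve(y, counts)
    best = thresholds[np.argmax(tpr - fpr)]   # maximize Youden's J = tpr - fpr
    print(f"optimal cutoff ~ {best:.0f} episodes")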

    Predicting probable eating disorder case-status in men using the Clinical Impairment Assessment: Evidence for a gender-specific threshold

    The Clinical Impairment Assessment (CIA) is a widely used self-report measure of the psychosocial impairment associated with eating-disorder symptoms. Past studies recommended a global CIA score of 16 to identify clinically significant impairment associated with a probable eating disorder (ED). However, to date, research on the properties of the CIA has been conducted in majority-women samples. Preliminary research on gender differences in CIA scores suggested men with EDs report less impairment on the CIA relative to women with EDs. Thus, the purpose of this study was to test whether a different impairment threshold is needed to identify cases of men with EDs. We hypothesized that a lower CIA threshold, relative to that identified in majority-women samples, would most accurately identify men with EDs. Participants (N = 162) were men from our university-based and general community-based ED participant registry who completed the CIA and Eating Disorder Diagnostic Scale. Both precision-recall and receiver operating characteristic curves assessed which CIA global score threshold most accurately identified men with EDs. Both analytic approaches indicated that a CIA global score of 13 best predicted ED case-status in men. Consistent with past research, men with a clinically significant ED appear to report lower impairment on the CIA. Results have implications for screening and assessing for substantial ED-related impairment in men. Additionally, past research using the CIA to identify men with EDs may have under-identified men with clinically significant symptoms.
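
    The threshold search described here can be reproduced with standard tools; below is a Python sketch of the precision-recall variant, picking the cutoff that maximizes F1 on synthetic CIA scores (hypothetical data, not the registry sample):

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    rng = np.random.default_rng(2)
    # Synthetic CIA global scores (range 0-48): non-cases vs probable ED cases.
    y = np.repeat([0, 1], 81)
    cia = np.concatenate([rng.normal(8, 4, 81), rng.normal(18, 5, 81)]).clip(0, 48)

    precision, recall, thresholds = precision_recall_curve(y, cia)
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    best = thresholds[np.argmax(f1[:-1])]   # last precision/recall pair has no threshold
    print(f"F1-optimal CIA cutoff ~ {best:.1f}")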